Mathematical Modeling 1

Introduction

Felix Schönbrodt

Ludwig-Maximilians-Universität München

2024-11-08


Step 1: Define variables

Define variables

How deep is your love?

Define a variable for each construct of the VAST display. These can be measured variables or unmeasured (mediating) variables.

You can either refer to an actual measurement procedure or simply define a variable. In both cases you should explicitly define the following properties:

  • variable name (i.e., to which concept in your VAST display does it refer?),
  • scale level (categorical, ordinal, interval scale),
  • range of possible values (e.g., 0 … 1),
  • semantic anchors (e.g., 0 = “complete absence”, 1 = “maximal possible value”)

Define variables

Some guiding questions and heuristics

  • Scale level of measurement (Stevens’s typology):
    Nominal → Ordinal → Interval → Ratio
  • Is the variable naturally bounded? On both sides or only one?
  • How can the numbers be interpreted?
    • Natural/objective scale (e.g. physical distance)
    • As standardized z-scores?
    • Normalized to a value between 0 and 1? Or rather -1 to 1?
    • Can we find an empirical or semantic calibration?
      • Just noticeable difference
      • 100 = “largest realistically imaginable quantity of that variable”

Group work (15 min.)

Specify variables

In the Google doc, below your Construct Source Table, create a new table with the following columns:

Example:
Construct in VAST display               Scale level   Range/values   Anchors
Affective tone of instruction           Continuous    [-1; 1]        -1 = maximally negative
                                                                      0 = neutral
                                                                     +1 = maximally positive
Momentary anxiety                       Continuous    [0; 1]          0 = no anxiety
                                                                      1 = maximal anxiety
Kohlberg’s Stages of Moral Development  Ordinal       {1; 2; 3}       1 = Pre-conventional
                                                                      2 = Conventional
                                                                      3 = Post-conventional

Note: This resembles a codebook; but for theoretical variables, not for measured variables.

Step 2: Define functional relationships between variables

Group work (10 min.)

Sketch a first functional relationship on paper

We want to model the following phenomenon (a specific version of the bystander effect):

  1. Without other people present, the tendency (probability or frequency) that a person helps a victim is high.
  2. The tendency of helping a victim decreases monotonically with the number of other people (bystanders) present.
  3. The tendency of helping a victim never drops to 0.

Task: Sketch a first functional relationship that could model this phenomenon. Use the variables you defined in the previous step (including their labels and ranges).

Step 2: Define functional relationships between variables

Every causal path needs to be implemented as a mathematical function, where the dependent variable/output \(y\) is a function of the input variables \(x_1, \ldots, x_n\):

\(y = f(x_1, x_2, \ldots, x_n)\)

This can, for example, be a linear function, \(y = \beta_0 + \beta_1x_1\).

Step 2: Define functional relationships between variables

Fixed and free parameters

\(\color{red} y = \color{forestgreen} \beta_0 \color{black} + \color{forestgreen} \beta_1 \color{blue} x\)
\(\color{red} y\) = output variable, \(\color{forestgreen} \beta\)s = parameters, \(\color{blue} x\) = input variable.

Two types of parameters:

  • Fixed parameters: Parameters that are chosen a priori and do not change in the light of empirical data. Their values are based on previous research, theory, or external information (such as properties of the world).
  • Free parameters: Can be adjusted to optimize the fit of the model to data. They are estimated from empirical data.

Note

Virtually all parameters (except natural constants) could be imagined as being free. It is a choice to fix some of them in order to simplify the model.

Step 2: Define functional relationships between variables

Fixed and free parameters

Fixing a parameter:

\(\color{forestgreen} \beta_0 \color{black} = 1\)

\(\color{red} y = \color{forestgreen} 1 \color{black} + \color{forestgreen} \beta_1 \color{blue} x\)

That means, the slope \(\color{forestgreen} \beta_1\) still can vary, but the intercept is fixed to 1.

Free parameters give flexibility to your function: If you are unsure about the exact relationship between two variables, you can estimate the best-fitting parameters from the data.

For example, sometimes a theory specifies the general functional form of a relationship (e.g., “With increasing \(x\), \(y\) is monotonically decreasing”), but does not tell how fast this decrease happens, where \(y\) starts when \(x\) is minimal, etc. These latter decisions are then governed by the free parameters.

Discussion

Sketch a first functional relationship on paper

  • Which aspects of your sketch could be free parameters? Describe them in plain language.
  • Draw a couple of alternative plots that would also fulfill the criteria of the phenomenon.
  • What would be the differences in the functional form? What is the semantic meaning of the free parameters?
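As an illustration, here are two sketches in R of functional forms that would fulfill the three criteria of the phenomenon. All function names and parameter values are illustrative choices, not the prescribed solution; the point is that `start`, `floor`, and `k` are exactly the kind of free parameters the discussion asks about.

```r
# Two candidate forms for the helping-tendency phenomenon.
# Free parameters (illustrative values):
#   start = tendency to help with 0 bystanders
#   floor = lower asymptote (tendency never drops to 0)
#   k     = how fast the tendency decreases with each bystander

# (a) Exponential decay towards a floor
help_exp <- function(n, start = 0.9, floor = 0.1, k = 0.5) {
  floor + (start - floor) * exp(-k * n)
}

# (b) Hyperbolic decay towards the same floor
help_hyp <- function(n, start = 0.9, floor = 0.1, k = 0.5) {
  floor + (start - floor) / (1 + k * n)
}

# Both are high at n = 0, decrease monotonically, and stay above the floor
n <- 0:10
plot(n, help_exp(n), type = "b", ylim = c(0, 1),
     xlab = "Number of bystanders", ylab = "Tendency to help")
lines(n, help_hyp(n), lty = 2)
```

The two forms differ in how quickly they approach the floor: the exponential version decays faster for large \(n\), while the hyperbolic version has a longer tail.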

Some mathematical tools

Tool 1: The logistic function family

As a linear function is unbounded, it can easily happen that the computed \(y\) exceeds the possible range of values.

If \(y\) has defined boundaries (e.g., \([0; 1]\)), a logistic function can bound the values between a lower and an upper limit (in the basic logistic function, between 0 and 1):

\(y = \frac{1}{1 + e^{-x}}\)
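This basic logistic function is built into R as `plogis()`, so you can inspect its shape directly:

```r
# The standard logistic function 1 / (1 + exp(-x)) is built into R as plogis()
plogis(0)   # 0.5, the inflection point
curve(plogis, from = -6, to = 6,
      xlab = "x", ylab = "y", main = "Basic logistic function")
```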

Tool 1: The logistic function family

With the 4PL (four-parameter logistic) model from IRT, you can adjust the functional form to your needs by:

  • shifting the inflection point left or right (a.k.a. “item difficulty”, parameter \(d\)),
  • changing the steepness of the S-shape (a.k.a. the “discrimination parameter”, parameter \(a\)),
  • moving the lower asymptote up or down (parameter \(min\)),
  • moving the upper asymptote up or down (parameter \(max\)):

logistic <- function(x, d = 0, a = 1, min = 0, max = 1) {
  min + (max - min) / (1 + exp(a * (d - x)))
}

Tool 1: The logistic function family

(basic logistic function as dotted grey line)

Tool 1: The logistic function family

The d, a, min, and max parameters can be used to “squeeze” the S-shaped curve into the range of your variables. For example, if your \(x\) variable is defined on the range \([0; 1]\), the following function parameters lead to a reasonable shift:
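One plausible parameterization (the specific values are illustrative, not prescribed) places the inflection point at \(x = 0.5\) and makes the curve steep enough that it nearly spans \([0; 1]\) within the range of \(x\). The `logistic()` definition from above is repeated here so the sketch is self-contained:

```r
# 4PL logistic function (as defined above)
logistic <- function(x, d = 0, a = 1, min = 0, max = 1) {
  min + (max - min) / (1 + exp(a * (d - x)))
}

# Illustrative parameters squeezing the S-shape into x in [0; 1]:
# inflection point at d = 0.5, steep slope a = 10
x <- seq(0, 1, by = 0.01)
plot(x, logistic(x, d = 0.5, a = 10), type = "l",
     xlab = "x", ylab = "y")
```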

Tool 2: The Distribution Zoo

If you start simulating data for your virtual participants, you want to simulate their starting values by drawing random values from a distribution. For example, the virtual participants might differ in their anxiety, which you previously defined on the range \([0; 1]\).

How can you generate random values that roughly look like a normal distribution, but are bounded to the defined range?

For simulations, it is good to know some basic distributions. Here are three interactive resources for choosing your distribution:

Tool 3: The beta distribution

A handy distribution for the \([0; 1]\) range is the beta distribution. With its two parameters \(\alpha\) (also called \(a\) or shape1) and \(\beta\) (also called \(b\) or shape2), it can take many different forms:

Tool 3: The beta distribution

How to choose \(\alpha\) and \(\beta\)? Asking ChatGPT/Wolfram Alpha for assistance

Assume that you intuitively started with a normal distribution with \(M=0.2\) and \(SD=0.1\) (rnorm(n, mean=0.2, sd=0.1)) to simulate your values. But this function can generate values < 0 and > 1.

How can you achieve a beta distribution that approximates the properties of this normal distribution? You can work out the math yourself (e.g., by looking at the formula for the mean and the variance of the beta distribution) - or you can ask ChatGPT. After all, we only use this function as a tool to get some plausible values.
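If you do want to work out the math yourself, the method of moments gives a closed-form conversion: the beta distribution has mean \(\mu = \alpha/(\alpha+\beta)\) and variance \(\sigma^2 = \mu(1-\mu)/(\alpha+\beta+1)\), which can be solved for \(\alpha\) and \(\beta\). A small sketch (the function name is ours; the formula requires \(\sigma^2 < \mu(1-\mu)\)):

```r
# Method-of-moments conversion from a target mean m and SD s
# to beta parameters (valid only if s^2 < m * (1 - m)):
#   alpha = m * nu,  beta = (1 - m) * nu,  where nu = m * (1 - m) / s^2 - 1
beta_from_moments <- function(m, s) {
  nu <- m * (1 - m) / s^2 - 1   # nu = alpha + beta
  c(shape1 = m * nu, shape2 = (1 - m) * nu)
}

beta_from_moments(0.2, 0.1)
# shape1 = 3, shape2 = 12
```

For \(M=0.2\) and \(SD=0.1\) this reproduces \(\alpha = 3\) and \(\beta = 12\).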

Tool 3: The beta distribution

“We have a normal distribution with mean=0.2 and SD=0.1. But the results should be bounded on a scale from 0 to 1. Create a beta-distribution that mimics the properties of the described normal distribution.”

“To mimic the properties of a specified normal distribution (with a mean and standard deviation) using a beta distribution within a bounded interval (in this case, 0 to 1), we need to find the parameters of the beta distribution (alpha \(\alpha\) and beta \(\beta\)) that match the mean and variance of the normal distribution as closely as possible.

[snip]

The parameters for the beta distribution that mimic the properties of the described normal distribution (with mean = 0.2 and standard deviation = 0.1, bounded between 0 and 1) are \(\alpha = 3\) and \(\beta = 12\).

This beta distribution should closely match the shape and spread of the specified normal distribution within the bounds of 0 to 1.”

Tool 3: The beta distribution

You can generate random values in R with the rbeta function. Here’s a comparison of a normal distribution and a matched beta distribution that respects the boundaries \([0; 1]\):

x.random <- rnorm(10000, mean=0.2, sd=0.1)
hist(x.random, xlab = "x", ylab="", main="Normal distribution (M=0.2, SD=0.1)", xlim=c(-0.3, 1.1))

x.beta <- rbeta(10000, shape1=3, shape2=12)
hist(x.beta, xlab = "x", ylab="", main="beta distribution (a=3, b=12)", xlim=c(-0.3, 1.1))

Tool 3: The beta distribution

Approximating a normal distribution with a beta distribution

Step 2: Define functional relationships between variables

Of course, the logistic function and the beta distribution are just a start - you can use the full toolbox of mathematical functions to implement your model!

Note

These considerations about functional forms, however, are typically not substantiated by psychological theory or background knowledge - at least at the start of a modeling project. We choose them because (a) we are acquainted with them, and/or (b) they are mathematically convenient and tractable.

Empirical evidence can inform both your choice of the functional form, and, in a model-fitting step, the values of the parameters.

Step 3: Implement the functions in R

Step 3: Implement the functions in R

Atomic functions

Each R function implements exactly one functional relationship.

Document all input parameters and say what the function computes. Do it inline in the code, so that the documentation is right there where you need it. In R, this can be done with the established roxygen2 documentation standard:

#' Compute the updated expected anxiety
#'
#' @param momentary_anxiety The momentary anxiety, on a scale from 0 to 1
#' @param previous_expected_anxiety The previous expected anxiety, on a scale from 0 to 1
#' @param alpha A factor that shifts the weight between the momentary anxiety (alpha=1) 
#'              and the previous expected anxiety (alpha=0) when the updated 
#'              expected anxiety is computed.

get_expected_anxiety <- function(momentary_anxiety, previous_expected_anxiety, alpha=0.5) {
  momentary_anxiety*alpha + previous_expected_anxiety*(1-alpha)
}

expected_anxiety <- get_expected_anxiety(momentary_anxiety = 0.3, previous_expected_anxiety = 0.6)

Step 3: Implement the functions in R

Next, connect all functions to one “super-function”, which takes all free inputs as parameters and computes the focal output variable(s).

Test the super-function:

  • Enter some reasonable combinations of parameters
  • Draw plots where you vary one parameter on the x-axis and see the behavior of the output variable on the y-axis.
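A hypothetical sketch of such a super-function, reusing the `get_expected_anxiety()` example from above (the super-function's name, inputs, and the chaining are illustrative placeholders, not a prescribed implementation):

```r
# Atomic function from the previous slide
get_expected_anxiety <- function(momentary_anxiety, previous_expected_anxiety,
                                 alpha = 0.5) {
  momentary_anxiety * alpha + previous_expected_anxiety * (1 - alpha)
}

# Hypothetical super-function: takes all free inputs as parameters
# and chains the atomic functions to compute the focal output
simulate_outcome <- function(momentary_anxiety, previous_expected_anxiety,
                             alpha = 0.5) {
  expected <- get_expected_anxiety(momentary_anxiety,
                                   previous_expected_anxiety, alpha)
  # ... further atomic functions would be chained here ...
  expected
}

# Test: vary one input on the x-axis, observe the output on the y-axis
x <- seq(0, 1, by = 0.1)
plot(x, simulate_outcome(x, previous_expected_anxiety = 0.6), type = "b",
     xlab = "momentary anxiety", ylab = "output")
```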

Step 4: Simulate a full data set

Which input variables do your simulated participants bring to the experiment? Real participants bring variability in these variables.

Assume a specific distribution of all input variables that participants bring to the experiment. Even better: find empirical evidence for it, for example from existing open data sets or summary statistics from publications.

Maybe you need to transform the scale of the empirical variable into the scale of your simulated equivalent (e.g., via z-standardization).
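For example, if the empirical variable lives on a different scale than your model variable, a simple linear rescaling can map it onto \([0; 1]\). The function below is an illustrative helper (our naming), assuming a hypothetical 1-7 rating scale:

```r
# Hypothetical rescaling: map an empirical 1-7 rating scale onto [0; 1]
rescale01 <- function(x, min_val = 1, max_val = 7) {
  (x - min_val) / (max_val - min_val)
}

rescale01(c(1, 4, 7))
# 0.0 0.5 1.0
```

Alternatively, z-standardize the empirical variable first and then map the z-scores into your model's range.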

Step 4: Simulate a full data set

Once all of these values are defined, simulate a sample of participants (with their starting values), put half of them into the experimental group and half into the control group, and use the resulting output values to compute a regular t-test.

Now the questions are:

  • Does the simulated model produce the phenomenon? How large is the effect size?
  • What happens if you change some of the functional parameters? Is the phenomenon still there? Is there a point in parameter space where the phenomenon breaks down?
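The full Step 4 pipeline can be sketched as follows. The treatment effect and all variable names are illustrative assumptions, not part of any specific model:

```r
# Minimal sketch of Step 4: simulate participants, assign groups, run a t-test.
# Assumption (illustrative): the treatment lowers anxiety by a fixed 0.05.
set.seed(1)
n <- 100

# Starting values in [0; 1], drawn from the matched beta distribution
anxiety <- rbeta(n, shape1 = 3, shape2 = 12)

# Half experimental group, half control group
group <- rep(c("control", "treatment"), each = n / 2)

# Apply the assumed treatment effect, keeping outcomes within [0; 1]
effect  <- ifelse(group == "treatment", -0.05, 0)
outcome <- pmin(pmax(anxiety + effect, 0), 1)

# Does the simulated model produce the phenomenon?
t.test(outcome ~ group)
```

From here, vary the effect size or the functional parameters and re-run the simulation to find where in parameter space the phenomenon breaks down.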

End

Contact

CC-BY-SA 4.0
